
    Floquet multipliers and the stability of periodic linear differential equations: a unified algorithm and its computer realization

    Floquet multipliers (characteristic multipliers) play a significant role in the stability of periodic equations. Based on an iterative method, we provide a unified algorithm to compute the Floquet multipliers and determine the stability of periodic linear differential equations on time scales, unifying discrete, continuous, and hybrid dynamics. Our approach rests on calculating the values of A and B (see Theorem 3.1), which are, respectively, the sum and product of all Floquet multipliers of the system. We obtain an explicit expression for A (see Theorem 4.1) by the method of variation and approximation theory, and an explicit expression for B by Liouville's formula. Furthermore, a computer program is designed to realize the algorithm: the stability of any second-order periodic linear system, whether discrete, continuous, or hybrid, can be determined simply by entering the parameters of the equation into the program. Little of the existing literature deals with algorithms for computing Floquet multipliers, let alone programs for their computer realization. Our algorithm gives explicit expressions for all Floquet multipliers, and our computer program is based on approximations of these explicit expressions. In particular, on an arbitrary discrete periodic time scale, a finite number of calculations yields the explicit values of the Floquet multipliers (see Theorem 4.2); therefore, for any discrete periodic system, the stability of the system can be determined exactly, even without a computer. Finally, in Section 6, several examples are presented to illustrate the effectiveness of our algorithm.
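    For a second-order system, knowing the sum A and product B of the two Floquet multipliers pins them down as the roots of λ² − Aλ + B = 0, and stability follows from whether all multipliers lie inside the unit circle. A minimal sketch of this final step (the computation of A and B themselves, per Theorems 3.1 and 4.1 of the paper, is not reproduced here; function names and the strict-stability criterion used are illustrative assumptions):

    ```python
    import cmath

    def floquet_multipliers(A, B):
        """Return the two Floquet multipliers of a second-order periodic
        system as the roots of lambda^2 - A*lambda + B = 0, where A and B
        are the sum and product of the multipliers (equivalently, the
        trace and determinant of the monodromy matrix)."""
        disc = cmath.sqrt(A * A - 4 * B)
        return (A + disc) / 2, (A - disc) / 2

    def is_asymptotically_stable(A, B, tol=1e-12):
        """Sketch of the stability test: asymptotic stability holds when
        every multiplier lies strictly inside the unit circle."""
        return all(abs(m) < 1 - tol for m in floquet_multipliers(A, B))
    ```

    For example, A = 3, B = 2 gives multipliers 2 and 1 (unstable), while A = 0, B = 0.25 gives ±0.5i, both of modulus 0.5 (stable).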

    Improving the Performance of Online Neural Transducer Models

    Having a sequence-to-sequence model that can operate in an online fashion is important for streaming applications such as Voice Search. The neural transducer (NT) is a streaming sequence-to-sequence model, but it has shown significant degradation in performance compared to non-streaming models such as Listen, Attend and Spell (LAS). In this paper, we present various improvements to NT. Specifically, we increase the window over which NT computes attention, mainly by looking backwards in time so that the model remains online. In addition, we explore initializing an NT model from a LAS-trained model so that it is guided by a better alignment. Finally, we explore stronger language models, such as using wordpiece models and applying an external LM during the beam search. On a Voice Search task, we find that with these improvements NT can match the performance of LAS.
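    One common way to apply an external LM during beam search is shallow fusion, where each hypothesis is ranked by an interpolation of the model's score and the LM's score. A minimal sketch under that assumption (the function names, the tuple layout, and the `lm_weight` value are illustrative, not taken from the paper):

    ```python
    def fused_score(logp_model, logp_lm, lm_weight=0.3):
        """Shallow fusion: combine the transducer's log-probability with an
        external LM's log-probability. lm_weight is a hypothetical tuning
        parameter chosen on held-out data."""
        return logp_model + lm_weight * logp_lm

    def rerank(hypotheses, lm_weight=0.3):
        """hypotheses: list of (text, logp_model, logp_lm) tuples produced
        by beam search. Returns them sorted best-first by fused score."""
        return sorted(hypotheses,
                      key=lambda h: fused_score(h[1], h[2], lm_weight),
                      reverse=True)
    ```

    With this scoring, a hypothesis that the external LM strongly prefers can overtake one the acoustic model alone would rank first.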